32 research outputs found

    Beat synchronization across the lifespan: intersection of development and musical experience

    Rhythmic entrainment, or beat synchronization, provides an opportunity to understand how multiple systems operate together to integrate sensory-motor information. Synchronization is also an essential component of musical performance that may be enhanced through musical training. Investigations of rhythmic entrainment have revealed a developmental trajectory across the lifespan, showing that synchronization improves with age and musical experience. Here, we explore the development and maintenance of synchronization from childhood through older adulthood in a large cohort of participants (N = 145), and also ask how it may be altered by musical experience. We employed a uniform assessment of beat synchronization for all participants and compared performance developmentally and between individuals with and without musical experience. We show that the ability to consistently tap along to a beat improves with age into adulthood, yet in older adulthood tapping performance becomes more variable. In addition, from childhood into young adulthood, individuals are able to tap increasingly close to the beat (i.e., asynchronies decline with age); however, this trend reverses from younger into older adulthood. There is a positive association between the proportion of life spent playing music and tapping performance, which suggests a link between musical experience and auditory-motor integration. These results are broadly consistent with previous investigations of the development of beat synchronization across the lifespan, and thus complement existing studies and offer new insights from a different, large cross-sectional sample.
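The two measures this abstract relies on, asynchrony and tapping variability, are simple to compute from tap and beat onset times. The sketch below is a hypothetical illustration, not the study's analysis code; it uses the simple linear formulation in which each tap is paired with its nearest beat, the mean signed tap-minus-beat interval gives asynchrony, and the standard deviation of those intervals gives tapping variability.

```python
import statistics

def tapping_metrics(tap_times, beat_times):
    """Pair each tap with its nearest beat and summarize performance.

    Returns (mean_asynchrony, tapping_variability), both in seconds:
    - mean asynchrony: average signed tap-minus-beat interval;
      negative values mean taps anticipate the beat.
    - tapping variability: standard deviation of those intervals;
      lower values indicate more consistent synchronization.
    """
    asynchronies = [
        tap - min(beat_times, key=lambda b: abs(tap - b))
        for tap in tap_times
    ]
    return statistics.mean(asynchronies), statistics.stdev(asynchronies)

# Hypothetical data: beats every 0.5 s, taps landing 30 ms ahead of each beat.
beats = [0.5 * i for i in range(8)]
taps = [b - 0.03 for b in beats]
mean_async, variability = tapping_metrics(taps, beats)
```

Note that synchronization studies often prefer circular statistics (treating taps as phases on the beat cycle) over this linear formulation, since circular measures handle taps that drift far from the beat more gracefully.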

    Comprehending comics and Graphic Novels: Watchmen as a Case for Cognition

    Research on the cognitive activities that underlie text processing has revealed a variety of influences on, and consequences of, our reading experiences. To date, though, cognitive psychologists who study reading processes have mainly focused on how people perceive and comprehend written text. We discuss herein some of the ways in which comic book comprehension might align with, and differ from, text-only comprehension. We use Watchmen (Moore & Gibbons, 1986-1987) as a case example to highlight how cues in the graphic novel, readers’ background knowledge, and inferences derived from the interactions of those cues and knowledge serve to support and enhance comic reading experiences. Such an analysis helps extend the purview of models of comprehension by considering how readers process materials that include both text and graphics, of which comic books are an important exemplar.

    Music training enhances the automatic neural processing of foreign speech sounds

    Growing evidence shows that music and language experience affect the neural processing of speech sounds throughout the auditory system. Recent work has mainly focused on the benefits of musical practice for the processing of native language or tonal foreign languages, which rely on pitch processing. The aim of the present study was to take this research a step further by investigating the effect of music training on the processing of English sounds by foreign listeners. We recorded subcortical electrophysiological responses to an English syllable in three groups of participants: native speakers, non-native nonmusicians, and non-native musicians. Native speakers had enhanced neural processing of the formant frequencies of speech compared to non-native nonmusicians, suggesting that automatic encoding of these relevant speech cues is sensitive to language experience. Most strikingly, in non-native musicians, neural responses to the formant frequencies did not differ from those of native speakers, suggesting that musical training may compensate for the lack of language experience by strengthening the neural encoding of important acoustic information. Language and music experience seem to induce a selective sensory gain along acoustic dimensions that are functionally relevant: here, formant frequencies that are crucial for phoneme discrimination.

Music and language are universals of human culture, and both require the perception, manipulation, and production of complex sound sequences. These sequences are hierarchically organized (syllables, words, and sentences in speech; notes, beats, and phrases in music), and their decoding requires an efficient representation of rapidly evolving sound cues, selection of relevant information, construction of temporary structures that take syntactic rules into account, and many other cognitive functions.
It is thus not surprising that music and speech processing share common neural resources (1–4), although some resources may be distinct (5). The acoustic and structural similarities, as well as the shared neural networks between speech and music, suggest that cognitive and perceptual abilities transfer from one domain to the other via the reorganization of common neural circuits (2). This hypothesis has been supported by showing that musical practice not only improves the processing of musical sounds (6–9), but also enhances several levels of speech processing, including the perception of prosody (10), consonant contrasts (11), speech segmentation (12), and syntactic processing (13). Interestingly, these findings extend to the subcortical level, showing an enhancement of the neural representations of the pitch, timbre, and timing of speech sounds with musical practice (14). Subcortical responses to speech are more robust to noise in musicians than in non-musicians, and this neural advantage correlates with a better ability to perceive speech in noisy backgrounds (15). Overall, these studies suggest that the perceptual advantages induced by intensive music training rely on an enhancement of the neural coding of sounds in both cortical and subcortical structures, extending to speech sounds. Interestingly, musical experience has also been associated with better perception and production of sounds in foreign languages (16–18). At the cortical level, slight pitch variations of both musical (i.e., harmonic) sounds and non-native speech syllables (i.e., Mandarin tones) evoke larger mismatch negativity (MMN) responses in non-native musicians than in non-native nonmusicians (17,19). At the subcortical level, Wong and colleagues (2007) showed that American musicians have a more faithful neural representation of the rapid pitch variations of Mandarin tone contours than American non-musicians (20). Moreover, this advantage correlates with the amount of musical experience.
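The subcortical measures discussed above (neural encoding of formant frequencies, pitch, and harmonics) typically reduce to spectral amplitudes of the averaged neural response at target frequencies. As a hedged illustration (not the authors' pipeline), a single-frequency Fourier magnitude over a response window can be computed directly:

```python
import math

def spectral_amplitude(signal, fs, freq):
    """Magnitude of the discrete Fourier component of `signal`
    (sampled at `fs` Hz) at `freq` Hz: a simple proxy for how
    strongly a periodic component at that frequency is encoded."""
    n = len(signal)
    re = sum(x * math.cos(2 * math.pi * freq * i / fs)
             for i, x in enumerate(signal))
    im = sum(-x * math.sin(2 * math.pi * freq * i / fs)
             for i, x in enumerate(signal))
    # Scale so a pure sinusoid of amplitude A yields A at its own frequency.
    return 2 * math.sqrt(re * re + im * im) / n

# Synthetic "response": a 100 Hz fundamental plus a weaker 700 Hz component
# (the frequencies are illustrative, not the study's stimulus values).
fs = 8000
resp = [math.sin(2 * math.pi * 100 * i / fs)
        + 0.3 * math.sin(2 * math.pi * 700 * i / fs)
        for i in range(fs)]  # 1 s of signal
a100 = spectral_amplitude(resp, fs, 100)  # strong fundamental encoding
a700 = spectral_amplitude(resp, fs, 700)  # weaker higher-frequency encoding
```

Group comparisons of such amplitudes at the stimulus fundamental or formant frequencies are one common way that "enhanced encoding" is quantified in this literature.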

    Native language shapes automatic neural processing of speech

    The development of the phoneme inventory is driven by the acoustic-phonetic properties of one's native language. Neural representation of speech is known to be shaped by language experience, as indexed by cortical responses, and recent studies suggest that subcortical processing also exhibits this early attunement to native language. However, the majority of work to date has focused on differences between tonal languages, which use pitch variations to convey phonemic categories, and non-tonal languages. The aim of this cross-language study is to determine whether subcortical encoding of speech sounds is sensitive to language experience by comparing native speakers of two non-tonal languages (French and English). We hypothesized that neural representations would be more robust and fine-grained for speech sounds that belong to the listener's native phonemic inventory, and especially for dimensions that are phonetically relevant to the listener, such as high-frequency components. We recorded neural responses of American English and French native speakers listening to natural syllables of both languages. Results showed that, independently of the stimulus, American participants exhibited greater neural representation of the fundamental frequency than French participants, consistent with the importance of the fundamental frequency for conveying stress patterns in English. Furthermore, participants showed more robust encoding and more precise spectral representations of the harmonics when listening to syllables of their native language compared to the non-native language. These results are consistent with the hypothesis that language experience shapes early sensory processing of speech and that this plasticity occurs as a function of what is behaviorally relevant to a listener.

    Reversal of age-related neural timing delays with training


    Beat Synchronization Changes Throughout Life.

    (A) The ability to tap to a beat improves with age into middle adulthood (ages 22 to 42.9), and then declines in older age, as assessed by tapping variability. (B) Anticipation of the beat was least accurate for children and older adults, as assessed by asynchrony. Error bars represent one standard error of the mean.